A Note on k-support Norm Regularized Risk Minimization
Author
Matthew B. Blaschko
Abstract
The k-support norm was recently introduced to perform correlated sparsity regularization [1]. Although Argyriou et al. reported experiments only with the squared loss, here we apply it to several other commonly used settings, resulting in novel machine learning algorithms with interesting and familiar limit cases. Source code for the algorithms described here is available from https://github.com/blaschko/ksupport.
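As a point of reference for the limit cases mentioned above, the k-support norm interpolates between the l1 norm (k = 1) and the l2 norm (k equal to the dimension), so k-support regularization of a given loss recovers lasso-like and ridge-like algorithms at the extremes. Below is a minimal NumPy sketch of evaluating the norm from the explicit sorted-coordinate formula of [1]; the function name and input handling are our own choices, an illustration rather than the repository's implementation.

```python
import numpy as np

def k_support_norm(w, k):
    """Evaluate the k-support norm via the explicit formula of [1]:
    sort |w| in decreasing order, find the unique split r in {0, ..., k-1}
    with |w|_{k-r-1} > (1/(r+1)) * sum_{i>=k-r} |w|_i >= |w|_{k-r} (1-based
    indices, |w|_0 = +inf), then combine an l2 head with an averaged tail."""
    z = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    d = z.size
    cum = np.concatenate(([0.0], np.cumsum(z)))  # cum[i] = sum of the top i entries
    for r in range(k):
        head = z[k - r - 2] if k - r - 2 >= 0 else np.inf
        tail = cum[d] - cum[k - r - 1]           # tail sum, averaged over r + 1
        if head > tail / (r + 1) >= z[k - r - 1]:
            return np.sqrt(np.sum(z[: k - r - 1] ** 2) + tail ** 2 / (r + 1))
    raise ValueError("no valid split found; requires 1 <= k <= len(w)")
```

Setting k = 1 places every entry in the averaged tail and returns the l1 norm, while k = len(w) returns the Euclidean norm, which is where the familiar lasso and ridge limit cases come from.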
Related Papers
Efficient k-Support-Norm Regularized Minimization via Fully Corrective Frank-Wolfe Method
The k-support-norm regularized minimization has recently been applied with success to sparse prediction problems. The proximal gradient method is conventionally used to minimize this composite model, but it tends to suffer from a high per-iteration cost, so solving the model can be time consuming. In our work, we reformulate the k-support-norm regularized formulation into a constrained fo...
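The constrained reformulation pairs naturally with Frank-Wolfe because the linear minimization oracle over a k-support-norm ball has a closed form: the dual norm is the l2 norm of the k largest-magnitude entries, so the oracle returns a k-sparse vector aligned with the top gradient coordinates. The sketch below uses a plain (not fully corrective) Frank-Wolfe step on a squared loss to illustrate the oracle; all names are our own, and it does not reproduce the paper's method.

```python
import numpy as np

def lmo_ksup_ball(grad, k, tau):
    """Linear minimization oracle over {s : ||s||_ksp <= tau}: the minimizer
    of <grad, s> is -tau times the unit-l2, k-sparse vector supported on the
    k largest-magnitude gradient entries (the dual norm is the top-k l2 norm)."""
    idx = np.argsort(np.abs(grad))[-k:]
    s = np.zeros_like(grad)
    nrm = np.linalg.norm(grad[idx])
    if nrm > 0:
        s[idx] = -tau * grad[idx] / nrm
    return s

def frank_wolfe_ls(X, y, k, tau, iters=200):
    """Plain Frank-Wolfe for min_w 0.5 * ||X w - y||^2 s.t. ||w||_ksp <= tau."""
    w = np.zeros(X.shape[1])
    for t in range(iters):
        grad = X.T @ (X @ w - y)
        s = lmo_ksup_ball(grad, k, tau)
        gamma = 2.0 / (t + 2)  # standard diminishing step size
        w = (1 - gamma) * w + gamma * s
    return w
```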
l2,1-Regularized Correntropy for Robust Feature Selection
In this paper, we study the problem of robust feature extraction based on l2,1-regularized correntropy in both theoretical and algorithmic terms. In the theoretical part, we point out that l2,1-norm minimization can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates convergence analysis and algorithmic development. In particular, a general formulation is accordi...
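One concrete instance of the half-quadratic view: in its multiplicative form, an l2,1 penalty is minimized by alternating closed-form auxiliary weights with a weighted ridge solve, i.e. iteratively reweighted least squares. The sketch below applies this to a plain squared-error data term rather than the correntropy loss studied in the paper; the names are illustrative.

```python
import numpy as np

def l21_regularized_ls(X, Y, lam, iters=50, eps=1e-8):
    """Iteratively reweighted solver for
    min_W ||X W - Y||_F^2 + lam * sum_i ||W[i, :]||_2.
    The HQ auxiliary variable gives diagonal weights 1 / (2 ||W[i, :]||_2),
    after which each iteration is an exact weighted ridge solve."""
    W = np.linalg.lstsq(X, Y, rcond=None)[0]        # unregularized warm start
    for _ in range(iters):
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))  # eps avoids division by zero
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
    return W
```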
Sparse Support Vector Infinite Push
In this paper, we address the problem of embedded feature selection for ranking on top of the list problems. We pose this problem as regularized empirical risk minimization with a p-norm push loss function (p = ∞) and sparsity-inducing regularizers. We tackle the issues arising in this challenging optimization problem by considering an alternating direction method of multipliers algorithm whi...
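To give a feel for the ADMM template with a sparsity-inducing regularizer, here is a hedged sketch with a squared loss standing in for the infinite-push loss (the harder loss-specific update is precisely what the paper addresses); the splitting w = z isolates the l1 term so its proximal step is a soft threshold. All names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_ls(X, y, lam, rho=1.0, iters=200):
    """ADMM for min_w 0.5 * ||X w - y||^2 + lam * ||z||_1 s.t. w = z."""
    d = X.shape[1]
    w, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(d))  # factor once, reuse
    Xty = X.T @ y
    for _ in range(iters):
        rhs = Xty + rho * (z - u)
        w = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # exact w-update
        z = soft_threshold(w + u, lam / rho)               # prox of the l1 term
        u = u + w - z                                      # dual update
    return z
```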
An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Least Squares Problems
The affine rank minimization problem, which consists of finding a matrix of minimum rank subject to linear equality constraints, has been proposed in many areas of engineering and science. A specific rank minimization problem is the matrix completion problem, in which we wish to recover a (low-rank) data matrix from incomplete samples of its entries. A recent convex relaxation of the rank minim...
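For concreteness, the proximal operator of the nuclear norm is soft-thresholding of the singular values, and wrapping that step in FISTA-style momentum yields an accelerated proximal gradient method for the matrix completion instance mentioned above. The sketch below is a generic APG of this kind, not the specific algorithm of the paper; names are our own.

```python
import numpy as np

def svt(M, t):
    """Singular value thresholding: the proximal operator of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def apg_completion(M_obs, mask, lam, iters=100):
    """FISTA-style APG for min_X 0.5 * ||mask * (X - M_obs)||_F^2 + lam * ||X||_*,
    where mask is the 0/1 pattern of observed entries (Lipschitz constant 1)."""
    X = np.zeros_like(M_obs)
    Z = np.zeros_like(M_obs)
    t = 1.0
    for _ in range(iters):
        grad = mask * (Z - M_obs)                      # gradient of the smooth part
        X_new = svt(Z - grad, lam)                     # proximal step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = X_new + ((t - 1.0) / t_new) * (X_new - X)  # momentum extrapolation
        X, t = X_new, t_new
    return X
```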
Distributed Stochastic Optimization of the Regularized Risk
Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic algorithms cannot be parallelized efficiently. In this paper we show that one can rewrite the regularized risk minimization problem as an equ...
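As context only (the paper's equivalent reformulation is not reproduced here), the naive data-parallel baseline is to run SGD independently on shards of the data and average the resulting parameters; its shortcomings are what motivate reformulations like the one described. The sketch below simulates the workers sequentially, with illustrative names throughout.

```python
import numpy as np

def sgd_reg_risk(X, y, lam, lr=0.01, epochs=5, seed=0):
    """Plain SGD on the l2-regularized squared loss (one shard, one worker)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i] + lam * w
            w -= lr * grad
    return w

def one_shot_parallel_sgd(X, y, lam, workers=4):
    """Naive baseline: split the data, run SGD per shard, average the weights."""
    shards = np.array_split(np.arange(len(y)), workers)
    ws = [sgd_reg_risk(X[s], y[s], lam, seed=k) for k, s in enumerate(shards)]
    return np.mean(ws, axis=0)
```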
Journal: CoRR
Volume: abs/1303.6390
Year: 2013